00100 EXPLANATIONS AND MODELS
00200 The Nature of Explanation
00300 It is perhaps as difficult to explain explanation itself as
00400 it is to explain anything else. (Nothing, except everything, explains
00500 anything). The explanatory practices of different sciences differ
00600 widely but all share the purpose of someone attempting to answer a
00700 why-how-what-etc. question about a situation, event, episode,
00800 object or phenomenon. Thus scientific explanation implies a dialogue
00900 whose participants share some interests, beliefs, and values. Some
01000 consensus must exist about what are admissible and appropriate
01100 questions and answers. The participants must have some degree of
01200 agreement on what is a sound and reasonable question and what is a
01300 relevant, intelligible, and (believed) correct answer. The explainer
01400 tries to satisfy the questioner's curiosity by making comprehensible
01500 why something is the way it is. Depending on what mystifies the
01600 questioner, the answer may be a definition, an example, a synonym, a
01700 story, a theory, a model-description, etc. The answer attempts to
01800 satisfy curiosity by settling belief, at least temporarily, since
01900 scientific beliefs are corrigible and revisable. A scientific
02000 explanation aims at convergence of belief in the relevant expert
02100 community.
02200 Suppose a man dies and a questioner (Q) asks an explainer (E):
02300 Q: Why did the man die?
02400 One answer might be:
02500 E: Because he swallowed cyanide.
02600 This explanation might be sufficient to satisfy Q's curiosity and he
02700 stops asking further questions. Or he might continue:
02800 Q: Why did the cyanide kill him?
02900 and E replies:
03000 E: Anyone who swallows cyanide dies.
03100 This explanation appeals to a universal generalization under which is
03200 subsumed the particular fact of this man's death. Subsumptive
03300 explanations, however, satisfy some questioners but not others who,
03400 for example, might want to know about the physiological mechanisms
03500 involved.
03600 Q: How does cyanide work in causing death?
03700 E: It stops respiration so the person dies from lack of oxygen.
03800 If Q has biochemical interests he might inquire further:
03900 Q: What is cyanide's mechanism of drug action on the
04000 respiratory center?
04100 The last two questions refer to causes. When human action is
04200 to be explained, confusion easily arises between appealing to
04300 physical, mechanical causes and appealing to symbolic-level reasons
04400 which constitute learned, acquired strategies seemingly of an
04500 ontological order different from causes. (See Toulmin, 1971).
04600 The phenomena of the paranoid mode can be found associated
04700 with a variety of physical disorders. For example, paranoid
04800 thinking can be found in patients with head injuries,
04900 hyperthyroidism, hypothyroidism, uremia, pernicious anemia, cerebral
05000 arteriosclerosis, congestive heart failure, malaria, epilepsy and
05100 drug intoxications caused by alcohol, amphetamines, marihuana and
05200 LSD. In these cases the paranoid mode is not a primary disorder but a
05300 disorder in processing information secondary to another underlying
05400 disorder. To account for the association of paranoid thought with
05500 these physical states of illness, a psychological theorist might be
05600 tempted to hypothesize that a purposive cognitive system would
05700 attempt to explain ill health by attributing it to other malevolent
05800 human agents. But before making such an explanatory move, we must
05900 consider the at-times elusive distinction between reasons and causes
06000 in explanations of human behavior.
06100 One view of the association of the paranoid mode with
06200 physical disorders might be that the physical illness simply causes
06300 the paranoia, through some unknown mechanism, at a physical level
06400 beyond the influence of deliberate self-direction and self-control.
06500 That is, the resultant paranoid mode represents something that
06600 happens to a person as victim, not something that he does as an
06700 active agent. Mechanical causes thus provide one type of reason in
06800 explaining behavior.
06900 Another view is that the paranoid mode can be explained in
07000 terms of symbolically-represented reasons consisting of rules and
07100 patterns of rules which specify an agent's intentions and beliefs.
07200 In a given situation does a person as an agent recognize, monitor and
07300 control what he is doing or trying to do? Or does it just happen
07400 to him automatically without conscious deliberation?
07500 This question raises a third view, namely that unrecognized
07600 symbolic-structure reasons, aspects of the symbolic representation
07700 which are sealed off from reflective deliberation, can function like
07800 mechanical causes in that they are inaccessible to voluntary control.
07900 If they can be brought to consciousness, such reasons can sometimes
08000 be modified voluntarily by the agent, who, using ordinary language as
08100 its own metalanguage, can reflexively talk to and instruct himself.
08200 This second-order monitoring and control through language contrasts
08300 with an agent's inability to modify mechanical causes or symbolic
08400 reasons which lie beyond the influence of self-criticism and
08500 self-emancipation carried out through linguistically mediated
08600 argumentation. Timeworn conundrums about concepts of free-will,
08700 determinism, responsibility, consciousness and the powers of mental
08800 action here plague us unless we can take advantage of a computer
08900 analogy in which a clear and useful distinction is drawn between
09000 levels of mechanical hardware and symbolically-represented programs.
09100 This important distinction will be elaborated shortly.
09200
09300 Each of these three views provides a serviceable perspective
09400 depending on how a disorder is to be explained and corrected. When
09500 paranoid processes occur during amphetamine intoxication, they can be
09600 viewed as biochemically caused and beyond the patient's ability to
09700 control volitionally through internal self-correcting dialogues. When
09800 a paranoid moment occurs in a normal person, it can be viewed as
09900 involving a symbolic misinterpretation. If the paranoid
10000 misinterpretation is recognized as unjustified, a normal person has
10100 the emancipatory power to revise or reject it through internal
10200 dialogue. Between these extremes of drug-induced paranoid states and
10300 the self-correctable paranoid moments of the normal person, lie cases
10400 of paranoid personalities, paranoid reactions and the paranoid mode
10500 associated with the major psychoses (schizophrenic and
10600 manic-depressive).
10700 One opinion has it that the major psychoses are a consequence
10800 of unknown physical causes and are, therefore, beyond deliberate
10900 voluntary control. But what are we to conclude about paranoid
11000 personalities and paranoid reactions where no physical "hardware"
11100 disorder is detectable or suspected? Are such persons to be
11200 considered patients to whom something is mechanically happening at
11300 the physical level or are they agents whose behavior is a consequence
11400 of what they do at the symbolic level? Or are they both agent and
11500 patient depending on how one views the self-modifiability of their
11600 symbolic processing? In these perplexing cases we shall take the
11700 position that in normal, neurotic and characterological paranoid
11800 modes, the psychopathology represents something that happens to a man
11900 as a consequence of what he has experientially undergone, of
12000 what he now does, and of what he now undergoes. Thus he is
12100 both agent and victim whose symbolic processes have powers to do and
12200 liabilities to undergo. His liabilities are reflexive in that he is
12300 both victim of, and can succumb to, his own symbolic structures.
12400
12500 From this standpoint I would postulate a duality at the
12600 symbolic level between reasons and causes. That is, a consciously
12700 unrecognized reason can operate like a mechanical cause in that it is
12800 inaccessible to voluntary modification by symbolic reprogramming. It
12900 is not reasons themselves which operate as causes but their execution
13000 which serves as a determinant of behavior. Human symbolic behavior
13100 is non-determinate to the extent that it is autonomously
13200 self-determinate. Thus the power to select among alternatives, to
13300 make some decisions freely and to change one's mind is non-illusory.
13400 When a reason is recognized to function as a cause and is accessible
13500 to self-monitoring (the monitoring of monitoring), emancipation from
13600 it can occur through change or rejection of belief. In this sense an
13700 at least two-levelled system is self-changeable and
13800 self-emancipatory, within limits.
13900 Explanations both in terms of causes and reasons can be
14000 indefinitely extended and endless questions can be asked at each
14100 level of analysis. Just as the participants in explanatory dialogues
14200 decide what is taken to be problematic, so they also determine the
14300 termini for a series of questions and answers. Each discipline has
14400 its characteristic stopping points and boundaries.
14500 Underlying such explanatory dialogues are larger and smaller
14600 constellations of concepts which are taken for granted as
14700 nonproblematic background. Hence in considering the strategies of
14800 the paranoid mode "it goes without saying" that any living teleonomic
14900 system, as the larger constellation, strives for maintenance and
15000 expansion of life. Also it should go without saying that, at a lower
15100 level, ion transport takes place through nerve-cell membranes. Every
15200 function of an organism can be viewed as governing a subfunction
15300 beneath it and as depending on a transfunction above it, which calls it
15400 into play for a purpose.
15500 There are many alternative ways of explaining just as there
15600 are many alternative ways of describing. An explanation is geared to
15700 some level of what the dialogue participants take to be the
15800 fundamental structures and processes under consideration. Since in
15900 psychiatry we cope with patients' problems using mainly
16000 symbolic-conceptual techniques (it is true that the pill, the knife,
16100 and electricity are also available), we are interested in aspects of
16200 human conduct which can be explained, understood, and modified at a
16300 symbol-processing level. Psychiatrists need theoretical symbolic
16400 systems from which their clinical experience can be logically derived
16500 to interpret the case histories of their patients. Otherwise they are
16600 faced with mountains of indigestible data and dross. To quote
16700 Einstein: "Science is an attempt to make the chaotic diversity of our
16800 sense experience correspond to a logically uniform system of thought
16900 by correlating single experiences with the theoretic structure."
17000
17100 The Symbol Processing Viewpoint
17200
17300 Segments and sequences of human behavior can be studied from
17400 many perspectives. I shall view sequences of paranoid symbolic
17500 behavior from an information-processing standpoint in which persons
17600 are viewed as symbol users. For a more complete explication and
17700 justification of this perspective, see Newell (1973) and Newell and
17800 Simon (1972).
17900 In brief, from this vantage point we define information as
18000 knowledge in a symbolic code. Symbols are considered to be
18100 representations of experience classified as objects, events,
18200 situations and relations. A symbolic process is a symbol-manipulating
18300 activity posited to account for observable symbolic behavior such as
18400 linguistic interaction. Under the term "symbol-processing" I include
18500 the seeking, manipulating and generating of symbols.
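To make the notion of a symbolic structure concrete, the following sketch (in Python, purely for illustration and not drawn from the model described later) shows one hypothetical way experience classified as objects, events, situations and relations might be encoded, sought, manipulated and generated; every name in it is invented:

    # Illustrative sketch only: one hypothetical encoding of "experience
    # classified as objects, events, situations and relations" as symbols.
    beliefs = [
        ("OBJECT", "MAFIA"),                        # a classified object
        ("EVENT", "OVERHEARD", "THREAT"),           # a classified event
        ("RELATION", "INTENDS", "MAFIA", "HARM"),   # a relation among symbols
    ]

    def seek(store, kind):
        """Seeking symbols: return every structure of a given kind."""
        return [s for s in store if s[0] == kind]

    def generate(kind, *parts):
        """Generating symbols: build a new symbolic structure."""
        return (kind,) + parts

    # Manipulating symbols: derive a further structure from an existing one.
    if seek(beliefs, "RELATION"):
        beliefs.append(generate("AFFECT", "FEAR"))

    print(beliefs)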
18600 Symbol-processing explanations postulate an underlying
18700 structure of hypothetical processes, functions, strategies, or
18800 directed symbol-manipulating procedures, having the power to produce
18900 and being responsible for observable patterns of phenomena. Such a
19000 structure offers an ethogenic (ethos = conduct or character, genic =
19100 generating) explanation for sequences or segments of symbolic
19200 behavior. (See Harre and Secord, 1972). From an ethogenic viewpoint,
19300 we can posit processes, functions, procedures and strategies as being
19400 responsible for and having the power to generate the symbolic
19500 patterns and sequences characteristic of the paranoid mode.
19600 "Strategies" is perhaps the best general term since it implies ways
19700 of obtaining an objective - ways which have suppleness and pliability
19800 since choice of application depends on circumstances. However,
19900 I shall use all these terms interchangeably.
20000
20100 Symbolic Models
20200 Theories and models share many functions and are often
20300 considered equivalent. One important distinction, however, lies in
20400 the fact that a theory states a subject has a certain structure but
20500 does not exhibit that structure in itself. (See Kaplan, 1964). In the
20600 case of computer simulation models there exists a further useful
20700 distinction. Computer simulation models which have the ability to
20800 converse in natural language using teletypes, actualize or realize a
20900 theory in the form of a dialogue algorithm. In contrast to a verbal,
21000 pictorial or mathematical representation, such a model, as a result
21100 of interaction, changes its states over time and ends up in a state
21200 different from its initial state.
21300 Einstein once remarked, in contrasting the act of description
21400 with what is described, that it is not the function of science to
21500 give the taste of the soup. Today this view would be considered
21600 unnecessarily restrictive. For example, a major test for synthetic
21700 insulin is whether it reproduces the effects, or at least some of the
21800 effects (such as lowering blood sugar), shown by natural insulin.
21900 Similarly, to test whether a simulation is successful, its effects
22000 must be compared with the effects produced by the naturally-occurring
22100 subject-process being modelled. An interactive simulation model
22200 which attempts to reproduce sequences of experienceable reality,
22300 offers an interviewer a first-hand experience with a concrete case.
22400 In constructing a computer simulation, a theory is modelled to
22500 discover a sufficiently rich structure of hypotheses and assumptions
22600 to generate the observable subject-behavior under study. A
22700 dialogue algorithm allows an observer to interact with a concrete
22800 specimen of a class in detail. In the case of our model, the level of
22900 detail is the level of the symbolic behavior of conversational
23000 language. This level is satisfying to a clinician since he can
23100 compare the model's behavior with its natural human counterparts
23200 using familiar skills of clinical dialogue. Communicating with the
23300 paranoid model by means of teletype, an interviewer can directly
23400 experience for himself a sample of the type of impaired social
23500 relationship which develops with someone in a paranoid mode.
23600 An algorithm composed of symbolic computational procedures
23700 converts input symbolic structures into output symbolic structures
23800 according to certain principles. The modus operandi of such a
23900 symbolic model is simply the workings of an algorithm when run on a
24000 computer. At this level of explanation, to answer a "why" question
24100 means to provide an algorithm which makes explicit how symbolic
24200 structures collaborate, interplay and interlock - in short, how they
24300 are organized to generate patterns of manifest phenomena.
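A toy example may make this concrete. The sketch below (in Python, invented for illustration; it is not the paranoid algorithm itself) has the same general shape: input symbols are classified, an internal state is consulted and updated, and an output symbolic structure is produced according to explicit principles. The topic names, the threshold and the replies are all hypothetical:

    # Toy dialogue procedure: input symbols -> internal state -> output symbols.
    # Topics, threshold and replies are invented for illustration only.
    state = {"mistrust": 0.2}                  # one internal variable

    def classify(text):
        """Map an input utterance onto a crude topic symbol."""
        if "police" in text or "fbi" in text:
            return "AUTHORITY"
        if text.startswith("why") or "who" in text:
            return "PROBE"
        return "NEUTRAL"

    def respond(line):
        """Convert an input symbolic structure into an output one."""
        topic = classify(line.lower())
        if topic == "AUTHORITY":
            state["mistrust"] += 0.4           # the state changes over time
        if state["mistrust"] > 0.5:
            return "Why do you want to know that?"
        return "I see." if topic == "NEUTRAL" else "Go on."

    for line in ["Hello there", "Do you know the police?", "Why am I here?"]:
        print(">", line)
        print(respond(line))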
24400
24500 To simulate the sequential input-output behavior of a system
24600 using symbolic computational procedures, one writes an algorithm
24700 which, when run on a computer, produces symbolic behavior resembling
24800 that of the subject system being simulated (Colby, 1973). The
24900 resemblance is achieved through the workings of the algorithm, an
25000 organization of symbol-manipulating procedures which are
25100 ethogenically responsible for the characteristic observable behavior
25200 at the input-output level. Since we do not know the structure of the
25300 "real" simulative processes used by the mind-brain, our posited
25400 structure stands as an imagined theoretical analogue, a possible and
25500 plausible organization of processes analogous to the unknown
25600 processes and serving as an attempt to explain their workings. A
25700 simulation model is thus deeper than a structureless black-box
25800 explanation because it postulates functionally equivalent processes
25900 inside the box to account for outwardly observable patterns of
26000 behavior. A simulation model constitutes an interpretive
26100 explanation in that it makes intelligible the connections between
26200 external input, internal states and output by positing intervening
26300 symbol-processing procedures operating between symbolic input and
26400 symbolic output. To be illuminating, a description of the model
26500 should make clear why and how it reacts as it does under various
26600 circumstances.
26700 Citing a universal generalization to explain an individual's
26800 behavior is unsatisfactory to a questioner who is interested in what
26900 powers and liabilities are latent behind manifest phenomena. To say
27000 "x is nasty because x is paranoid and all paranoids are nasty" may be
27100 relevant, intelligible and correct. But another type of explanation
27200 is possible: a model-explanation referring to a structure which can
27300 account for "nasty" behavior as a consequence of input and internal
27400 states of a system. A model explanation specifies particular
27500 antecedents and processes through which antecedents generate the
27600 phenomena. An ethogenic approach to explanation assumes perceptible
27700 phenomena display the regularities and nonrandom irregularities they
27800 do because of the nature of an underlying structure inaccessible to
27900 inspection. The posited theoretical structure is an idealized
28000 analogue to the unobservable structure in persons.
28100 In attempting to explain human behavior, principles are
28200 involved in addition to those accounting for natural order. "Nature
28300 entertains no opinions about us", said Nietzsche. But human natures
28400 do, and therein lies a source of complexity for the understanding
28500 of human conduct. Until the first quarter of the 20th century,
28600 natural sciences were guided by the Newtonian ideal of perfect
28700 process knowledge about inanimate objects whose behavior could be
28800 subsumed under lawlike generalizations. When a deviation from a law
28900 was noticed, it was the law which was subsequently modified, since by
29000 definition physical objects did not have the power to break laws.
29100 When the planet Mercury was observed to deviate from the orbit
29200 predicted by Newtonian theory, no one accused the planet of being an
29300 intentional agent disobeying a law. Instead it was suspected that
29400 something was incorrect about the theory.
29500 This approach using subsumptive explanation is the acceptable
29600 norm in many fields. It is seldom satisfactory in accounting for
29700 particular sequences of behavior in living purposive systems. When
29800 physical bodies fall in the macroscopic world, few find it
29900 scientifically useful to posit that bodies have an intention to fall.
30000 But in our imagery of living systems, especially in our
30100 Menschanschauung, the ideal explanatory practice is teleonomically
30200 Aristotelian, utilizing a concept of intention. (For a thorough
30300 discussion of purpose and intentionality, see Boden, 1972).
30400 Consider a man participating in a high-diving contest. In
30500 falling towards the water he accelerates at the rate of 32 feet per
30600 second per second. Viewing the man simply as a falling body, we explain his rate
30700 of fall by appealing to a physical law. Viewing the man as a human
30800 intentionalistic agent, we explain his dive as the result of an
30900 intention to dive in a certain way in order to win the diving
31000 contest. His conduct (in contrast to mere movement) involves an
31100 intended following of certain conventional rules for what is judged
31200 by humans to constitute, say, a swan dive. Suppose part-way down he
31300 chooses to change his position in mid-air and enter the water
31400 thumbing his nose at the judges. He cannot disobey the law of falling
31500 bodies but he can disobey or ignore the rules of diving. He can also
31600 make a gesture which expresses disrespect and which he believes will
31700 be interpreted as such by the onlookers. Our diver breaks a rule
31800 for diving but follows another rule which prescribes gestural action
31900 for insulting behavior. To explain the actions of diving and
32000 nose-thumbing, therefore, we would appeal, not to laws of natural
32100 order, but to an additional order, to principles of human order.
32200 This order is superimposed on laws of natural order and takes into
32300 account (1) standards of appropriate action in certain situations and
32400 (2) the agent's inner considerations of intention, belief and value
32500 which he finds compelling from his point of view. In this type of
32600 explanation the explanandum, that which is being explained, is the
32700 agent's informed actions, not simply his movements. When a human
32800 agent performs an action in a situation, we can ask: is the action
32900 appropriate to that situation and if not, why did the agent believe
33000 his action to be called for?
33100 Symbol-processing explanations of human conduct rely on
33200 concepts of intention, belief, action, affect, etc. Characteristic
33300 of early stages of explanation, the terms for these concepts are
33400 close to the terms of ordinary language. It is also important to note
33500 that such terms are commonly utilized in describing computer
33600 algorithms which follow rules in striving to achieve goals. The
33700 advantage is that in an algorithm these ordinary language terms can
33800 be explicitly defined and represented.
33900 Psychiatry deals with the practical concerns of inappropriate
34000 action, belief, etc. on the part of a patient. The patient's behavior
34100 may be inappropriate to onlookers since it represents a lapse from
34200 the expected, a contravention of the human order. It may even appear
34300 this way to the patient in monitoring and directing himself.
34400 Sometimes, however, the patient's behavior does not appear anomalous
34500 to himself. He maintains that anyone who understands his point of
34600 view, who conceptualizes situations as he does from the inside, would
34700 consider his outward behavior appropriate and justified. What he does
34800 not understand or accept is that his inner conceptualization is
34900 mistaken and represents a misinterpretation of the events of his
35000 experience.
35100 The model to be presented in the sequel constitutes an
35200 attempt to explain some regularities and particular occurrences of
35300 symbolic (conversational) paranoid behavior observable in the
35400 clinical situation of a psychiatric interview. The explanation is
35500 at the symbol-processing level of linguistically communicating agents
35600 and is cast in the form of a dialogue algorithm. Like all
35700 explanations, it is tentative, incomplete, and does not claim to
35800 represent the only conceivable structure of processes.
35900
36000 The Nature of Algorithms
36100
36200 Theories can be presented in various forms: prose essays,
36300 mathematical equations and computer programs. To date, most
36400 theoretical explanations in psychiatry and psychology have consisted
36500 of natural language essays with all their well-known vagueness and
36600 ambiguities. Many of these formulations have been untestable, not
36700 because relevant observations were lacking, but because it was
36800 unclear what the essay was really saying. Clarity is needed.
36900 Science may begin with metaphors, but it should try to end up with
37000 algorithms.
37100 Another way of formulating psychological theories is now
37200 available in the form of symbol-processing algorithms, computer
37300 programs, which have the virtue of being explicit in their
37400 articulation, traceable in their operations, and which can be run on
37500 a computer to test their internal consistency and external
37600 correspondence with the data of observation. The subject-matter (or
37700 subject) of a model is what it is a model of; the source of a model
37800 is what it is based upon. Since we do not know the "real" algorithms
37900 used by people, we construct a theoretical model, based upon computer
38000 algorithms. This model represents a partial analogy. (Harre,
38100 1970). The partial analogy is made at the symbol-processing level,
38200 not at the hardware level. A functional, computational or
38300 procedural equivalence is being
38400 postulated. The question then becomes one of categorizing the
38500 extent of the equivalence. A beginning (first-approximation)
38600 functional equivalence might be defined as indistinguishability at
38700 the level of observable I-O pairs. A stronger equivalence would
38800 consist of indistinguishability at inner I-O levels. That is, there
38900 exists a correspondence between what is being done and how it is
39000 being done at a given operational level.
39100 An algorithm represents an organization of symbol-processing
39200 strategies or functions which constitute an "effective procedure". An
39300 effective procedure consists of three components:
39400
39500 (1) A programming language in which procedural rules of
39600 behavior can be rigorously and unambiguously specified.
39700 (2) An organization of procedural rules which constitute
39800 the algorithm.
39900 (3) A machine processor which can rapidly and reliably carry
40000 out the processes specified by the procedural rules.
40100 The specifications of (2), written in the formally defined
40200 programming language of (1), are termed an algorithm or program
40300 whereas (3) involves a computer as the machine processor - a set of
40400 deterministic physical mechanisms which can perform the operations
40500 specified in the algorithm. The algorithm is called "effective"
40600 because it actually works, performing as intended and producing the
40700 effects desired by the model builders when run on the machine
40800 processor.
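As an illustration only (Python standing in for the programming language, and the Python interpreter on its host machine standing in for the machine processor), the sketch below separates the three components; the rules themselves are invented and are not those of the paranoid model:

    # (1) The programming language is Python, in which the rules below are
    #     stated unambiguously.
    # (2) The organization of procedural rules is the RULES table together
    #     with the order in which run() consults it.
    # (3) The machine processor is the computer running the Python
    #     interpreter, which carries the rules out rapidly and reliably.
    # The rules are invented examples, not those of the paranoid model.
    RULES = [
        ("greeting", lambda s: "hello" in s,    "HELLO."),
        ("question", lambda s: s.endswith("?"), "WHY DO YOU ASK?"),
        ("default",  lambda s: True,            "GO ON."),
    ]

    def run(inputs):
        """Apply the first matching rule to each input, in order."""
        for text in inputs:
            s = text.lower().strip()
            for name, condition, reply in RULES:
                if condition(s):
                    print(text, "->", reply, "(rule:", name + ")")
                    break

    run(["Hello doctor", "Are you afraid of me?", "Tell me more"])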
40900 A simulation model is composed of procedures taken to be
41000 analogous to imperceptible and inaccessible procedures of the
41100 mind-brain. We are not claiming they ARE analogous; we are MAKING
41200 them so. The analogy being drawn here is between specified processes
41300 and their generating systems. Thus, in comparing mental processes to
41400 computational processes, we might assert:
41500
41600      mental processes              computational processes
41700      ------------------      ::    -----------------------
41800      brain hardware and             computer hardware and
41900      programs                       programs
42000
42100 Many of the classical mind-brain problems arose because there
42200 did not exist a familiar, well-understood analogy to help people
42300 imagine how a system could work having a clear separation between its
42400 hardware descriptions and its program descriptions. With the advent
42500 of computers and programs some mind-brain perplexities disappear.
42600 (Colby, 1971). The analogy is not simply between computer hardware
42700 and brain wetware. We are not comparing the structure of neurons
42800 with the structure of transistors; we are comparing the
42900 organization of symbol-processing procedures in an algorithm with
43000 symbol-processing procedures of the mind-brain. The central nervous
43100 system contains a representation of the experience of its holder. A
43200 model builder has a conceptual representation of that representation
43300 which he demonstrates in the form of a model. Thus the model is a
43400 demonstration of a representation of a representation.
43500 An algorithm can be run on a computer in two forms, a
43600 compiled version and an interpreted version. In the compiled version
43700 a preliminary translation has been made from the higher-level
43800 programming language (source language) into lower-level machine
43900 language (object language) which controls the on-off state of
44000 hardware switching devices. When the compiled version is run, the
44100 instructions of the machine-language code are directly executed. In
44200 the interpreted version each high-level language instruction is first
44300 translated into machine language, executed, and then the process is
44400 repeated with the next instruction. One important aspect of the
44500 distinction between compiled and interpreted versions is that the
44600 compiled version, now written in machine language, is not easily
44700 accessible to change using the higher-level language. In order to
44800 change the program, the source-language version must be modified and then
44900 re-compiled into the object language. The
45000 rough analogy with ever-changing human symbolic behavior lies in
45100 suggesting that modifications require change at the source-language
45200 level. Otherwise compiled algorithms are inaccessible to second-order
45300 monitoring and modification.
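A toy sketch of this distinction, with no claim to describe any particular language system, may help. Here "source" instructions are symbolic, "object code" is a list of primitive operations, and behavior can only be changed by editing the source and compiling again; all names are invented:

    # Toy illustration of the compiled/interpreted distinction.
    SOURCE = ["INCREMENT", "INCREMENT", "PRINT"]      # high-level program

    PRIMITIVES = {                                    # "machine" operations
        "INCREMENT": lambda st: st.update(counter=st["counter"] + 1),
        "PRINT":     lambda st: print("counter =", st["counter"]),
    }

    def interpret(source):
        """Translate and execute one instruction at a time."""
        state = {"counter": 0}
        for instruction in source:
            PRIMITIVES[instruction](state)            # translate, then execute

    def compile_program(source):
        """Pre-translate the whole program into primitive operations."""
        return [PRIMITIVES[instruction] for instruction in source]

    def run_compiled(object_code):
        """Execute the pre-translated operations directly."""
        state = {"counter": 0}
        for operation in object_code:
            operation(state)

    interpret(SOURCE)                       # prints: counter = 2
    run_compiled(compile_program(SOURCE))   # same behavior
    # To change the behavior one must edit SOURCE and compile again; the
    # object code itself is opaque to source-level modification.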
45400 Since we are taking running computer programs as a source of
45500 analogy for a paranoid model, logical errors or pathological behavior
45600 on the part of such programs are of interest to the
45700 psychopathologist. These errors can be ascribed to the hardware
45800 level, to the interpreter, or to the programs which the interpreter
45900 executes. Different remedies are required at different levels. If
46000 the analogy is to be clinically useful in the case of human
46100 pathological behavior, it will become a matter of influencing
46200 symbolic behavior with the appropriate techniques.
46300 Since the algorithm is written in a programming language, it
46400 is hermetic except to a few people, who in general do not enjoy
46500 reading other people's code. Hence the intelligibility and
46600 scrutability requirement for explanations must be met in other ways.
46700 In an attempt to open the algorithm to scrutiny I shall describe the
46800 model in detail using diagrams and interview examples profusely.
46900
47000
47100 Analogy
47200
47300 I have stated that an interactive simulation model of
47400 symbol-manipulating processes reproduces sequences of symbolic
47500 behavior at the level of linguistic communication. The reproduction
47600 is achieved through the operations of an algorithm consisting of an
47700 organization of hypothetical symbol-processing strategies or
47800 procedures which can generate the I-O behavior of the subject-
47900 processes under investigation. The algorithm is an "effective
48000 procedure" in the sense it really works in the manner intended by the
48100 model-builders. In the model to be described, the paranoid algorithm
48200 generates linguistic I-O behavior typical of patients whose
48300 symbol-processing is dominated by the paranoid mode. Comparisons can
48400 be made between samples of the I-O behaviors of patients and model.
48500 But the analogy is not to be drawn at this level. Mynah birds and
48600 tape recorders also reproduce human linguistic behavior, but no one
48700 believes the reproduction is achieved by powers analogous to human
48800 powers. Given that the manifest outermost I-O behavior of the model
48900 is indistinguishable from the manifest outward I-O behavior of
49000 paranoid patients, does this imply that the hypothetical underlying
49100 processes used by the model are analogous to (or perhaps the same
49200 as?) the underlying processes used by persons in the paranoid mode?
49300 This deep and far-reaching question should be approached with caution
49400 and only when we are first armed with some clear notions about
49500 analogy, similarity, faithful reproduction, indistinguishability and
49600 functional equivalence.
49700 In comparing two things (objects, systems or processes) one
49800 can cite properties they have in common (positive analogy),
49900 properties they do not share (negative analogy) and properties as to
50000 which we do not know whether they are positive or negative (neutral
50100 analogy). (See Hesse, 1966). No two things are exactly alike in every
50200 detail. If they were identical in respect to all their properties
50300 then they would be copies. If they were identical in every respect
50400 including their spatio-temporal location we would say we have only
50500 one thing instead of two. Everything resembles something else and
50600 maybe everything else, depending upon how one cites properties.
50700 In an analogy a similarity relation is evoked. "Newton did
50800 not show the cause of the apple falling but he showed a similitude
50900 between the apple and the stars." (D'Arcy Thompson). Huygens suggested
51000 an analogy between sound waves and light waves in order to understand
51100 something less well-understood (light) in terms of something better
51200 understood (sound). To account for species variation, Darwin
51300 postulated a process of natural selection. He constructed an
51400 analogy from two sources, one from artificial selection as practiced
51500 by domestic breeders of animals and one from Malthus' theory of a
51600 competition for existence in a population increasing geometrically
51700 while its resources increase arithmetically. Bohr's model of the atom
51800 offered an analogy between solar system and atom. These well-known
51900 historical examples should be sufficient here to illustrate the role
52000 of analogies in theory construction. Analogies are made in respect
52100 to those properties which constitute the positive and neutral
52200 analogy. The negative analogy is ignored. Thus Bohr's model of
52300 the atom as a miniature planetary system was not intended to suggest
52400 that electrons possessed color or that planets jumped out of their
52500 orbits.
52600
52700 Functional Equivalence
52800
52900 When human symbolic processes are the subject of a simulation
53000 model, we draw the analogy from two sources, symbolic computation and
53100 psychology. The analogy made is between systems known to have the
53200 power to process symbols, namely, persons and computers. The
53300 properties compared in the analogy are obviously not physical or
53400 morphological such as blood and wires, but functional and procedural.
53500 We want to assume that not-well-understood mental procedures in a
53600 person are similar to the more accessible and better understood
53700 procedures of symbol-processing which take place in a computer. The
53800 analogy is one of functional or procedural equivalence. (For a
53900 further account of functional analysis, see Hempel, 1965). Mousetraps,
54000 for example, are all functionally equivalent. There exists a large set of physical
54100 mechanisms for catching mice. The term "mousetrap" says what each
54200 member of the set has in common. Each takes as input a live mouse
54300 and yields as output a dead one. Systems equivalent from one point of
54400 view may not be equivalent from another (Fodor, 1968).
54500 If model and human are indistinguishable at the manifest
54600 level of linguistic I-O pairs, then they can be considered equivalent
54700 at that level. If they can be shown to be indistinguishable at
54800 more internal symbolic levels, then a stronger equivalence exists.
54900 How stringent and how extensive are the demands for equivalence to
55000 be? Must the correspondence be point-to-point or can it be the more
55100 global system-to-system? Must there be point-to-point correspondences
55200 at every level? What is to count as a point and what are the levels?
55300 Procedures can be specified and ostensively pointed to in an
55400 algorithm, but how can we point to unobservable symbolic processes in
55500 a person? There is an inevitable limit to scrutinizing the
55600 "underlying" processes of the world. Einstein likened this
55700 situation to a man explaining the behavior of a watch without opening
55800 it: "He will never be able to compare his picture with the real
55900 mechanism and he cannot even imagine the possibility or meaning of
56000 such a comparison".
56100 In constructing an algorithm one puts together an
56200 organization of collaborating functions or procedures. A function
56300 takes some symbolic structure as input and yields some symbolic
56400 structure as output. Two computationally equivalent functions, having
56500 the same input and yielding the same output, can differ "inside" the
56600 function at the instruction level.
56700 Consider an elementary programming problem which students in
56800 symbolic computation are often asked to solve. Given a list L of
56900 symbols, L=(A B C D), as input, construct a function or procedure
57000 which will convert this list to the list RL in which the order of the
57100 symbols is reversed, i.e. RL=(D C B A). There are many ways of
57200 solving this problem and the code of one student may differ greatly
57300 from that of another at the level of individual instructions. But the
57400 differences of such details are irrelevant. What is significant is
57500 that the solutions make the required conversion from L to RL. The
57600 correct solutions will all be computationally equivalent at the
57700 input-output level since they take the same symbolic structures as
57800 input and produce the same symbolic output.
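For instance, the two sketches below (written in Python rather than the list-processing language such a student would actually have used) both convert L to RL. They are computationally equivalent at the input-output level even though they differ internally, one working iteratively and the other recursively:

    # Two ways of reversing L = (A B C D): equivalent at the input-output
    # level, different at the level of individual instructions.
    def reverse_iterative(lst):
        """Push each element onto the front of a growing result list."""
        result = []
        for symbol in lst:
            result.insert(0, symbol)
        return result

    def reverse_recursive(lst):
        """Reverse the tail of the list, then append the head at the end."""
        if not lst:
            return []
        return reverse_recursive(lst[1:]) + [lst[0]]

    L = ["A", "B", "C", "D"]
    assert reverse_iterative(L) == reverse_recursive(L) == ["D", "C", "B", "A"]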
57900 If we propose that an algorithm we have constructed is
58000 functionally equivalent to what goes on in humans when they process
58100 symbolic structures, how can we justify this position?
58200 Indistinguishability tests at, say, the linguistic level provide
58300 evidence only for beginning equivalence. We would like to be able to
58400 have access to the underlying processes in humans the way we can with
58500 algorithms. (Admittedly, we do not directly observe processes at all
58600 levels but only the products of some). The difficulty lies in
58700 identifying, making accessible, and counting processes in human
58800 heads. Many symbol-processing experiments are now being designed
58900 and carried out. We must have great patience with this type of
59000 experimental information-processing psychology.
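By way of illustration only, a first-approximation I-O comparison can be pictured as in the sketch below; the interview fragments are invented, and a literal string match stands in for the human judgment that real indistinguishability testing would require:

    # Invented interview fragments; a crude string-matching "judge" stands
    # in for the human judgments an actual test would use.
    model_responses = {
        "How are you feeling today?":    "I am all right, I guess.",
        "Do you trust the people here?": "You have to be careful who you trust.",
    }
    patient_responses = {
        "How are you feeling today?":    "I am all right, I guess.",
        "Do you trust the people here?": "I don't trust anybody here.",
    }

    def io_agreement(model, patient):
        """Fraction of shared inputs on which the two outputs coincide."""
        shared = [question for question in model if question in patient]
        matches = sum(model[q] == patient[q] for q in shared)
        return matches / len(shared)

    print("agreement at the I-O level:",
          io_agreement(model_responses, patient_responses))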
59100 In the meantime, besides first-approximation I-O equivalence
59200 and plausibility arguments, one might appeal to extra-evidential
59300 support offering parallelisms from neighboring scientific domains.
59400 One can offer analogies between what is known to go on at a molecular
59500 level in the cells of living organisms and what goes on in an
59600 algorithm. For example, a DNA molecule in the nucleus of a cell
59700 consists of an ordered sequence (list) of nucleotide bases (symbols)
59800 coded in triplets termed codons (words). Each element of the codon
59900 specifies which amino acid during protein synthesis is to be linked
60000 into the chain of polypeptides making up the protein. The codons
60100 function like instructions in a programming language. Some codons are
60200 known to operate as terminal symbols analogous to symbols in an
60300 algorithm which mark the end of a list. If, as a result of a
60400 mutation, a nucleotide base is changed, the usual protein will not be
60500 synthesized. The resulting polypeptide chain may have lethal or
60600 trivial consequences for the organism, depending on what must be
60700 passed on to other processes which require the polypeptide to be handed
60800 over to them. The same holds in an algorithm. If a symbol or word in a
60900 procedure is incorrect, the procedure cannot operate in its intended
61000 manner. Such a result may be lethal or trivial to the algorithm
61100 depending on what information the faulty procedure must pass on at
61200 its interface with other procedures in the overall organization. Each
61300 procedure in an algorithm is embedded in an organization of
61400 collaborating procedures just as are functions in living organisms.
61500 We know that at the molecular level of living organisms there exists
61600 a process such as serial progression along a nucleotide sequence,
61700 which is analogous to stepping down a list in an algorithm. Further
61800 analogies can be made between point mutations in which DNA bases can
61900 be inserted, deleted, substituted or reordered and symbolic
62000 computation in which the same operations are commonly carried out on
62100 symbolic structures. Such analogies are interesting as
62200 extra-evidential support but obviously closer linkages are needed
62300 between the macro-level of symbolic processes and the micro-level of
62400 molecular information-processing within cells.
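The parallel can be pictured in a small sketch (invented symbols, Python for convenience): a list of symbols ending in a terminator is read by stepping down the list, and a one-symbol "mutation" is lethal or trivial depending on what a downstream procedure needs:

    # A list of symbols with a terminator, read by stepping down the list.
    # The symbols and the downstream requirement are invented.
    STOP = "STOP"                           # terminal symbol, like a stop codon

    def read_chain(symbols):
        """Collect symbols until the terminator is reached."""
        chain = []
        for symbol in symbols:
            if symbol == STOP:
                return chain
            chain.append(symbol)
        raise ValueError("no terminator: the procedure cannot finish")

    def downstream(chain):
        """A later procedure that happens to need only the first two symbols."""
        return chain[0] + "-" + chain[1]

    normal  = ["MET", "GLY", "ALA", STOP]
    trivial = ["MET", "GLY", "VAL", STOP]   # substitution the downstream step never sees
    lethal  = ["MET", "GLY", "ALA"]         # the terminator itself is lost

    print(downstream(read_chain(normal)))   # MET-GLY
    print(downstream(read_chain(trivial)))  # MET-GLY: this mutation is trivial
    try:
        read_chain(lethal)
    except ValueError as error:
        print("lethal:", error)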
62500 To obtain evidence for the acceptability of a model as
62600 faithful or authentic, empirical tests are utilized as validation
62700 procedures. Such tests should also tell us which is the best among
62800 alternative versions of a family of models and, indeed, among
62900 alternative families of models. Scientific explanations do not stand
63000 alone in isolation. They are evaluated relative to rival contenders
63100 for the position of "best available". Once we accept a theory or
63200 model as the best available, can we be sure it is correct or true? We
63300 can never know with certainty. Theories and models are provisional
63400 and partial approximations to nature destined in time to be
63500 abandoned and superseded by better ones.